System-on-chip (SoC) design service providers are under growing pressure to maximize flexibility in their designs and services. With that increased flexibility comes the need for more careful verification of the complete system, and emulation becomes central to that verification effort.
Clearly, functional verification is one of the most highly charged subjects in SoC design, and our Programmable Universal Controller device was no exception. The SoC was aimed at the portable consumer appliance market, targeting products such as MP3 players and mobile phones, and we spent a great deal of time on feasibility studies trying to optimize the requirements.
A structured, well-planned functional verification approach accounted for 40 percent of the project's overall budget. As a result, the customer could process MP3 music on the device within a week of Tality's delivering initial samples.
The device had several key requirements. First, as its name suggests, the Programmable Universal Controller had to be flexible to ensure it supported as many portable applications as possible. In addition, with the device targeted at portable applications, power consumption was a key consideration. This resulted in the implementation of several power-management modes and a powerful and extremely complex clocking scheme.
As with many such devices, data security was another key consideration, so a great deal of time was spent ensuring that data stored in the embedded memory was carefully partitioned and protected.
The device was divided into three areas: processor subsystem, communication interfaces and management modules.
The overall architecture is based on the ARM7TDMI system processor (Fig. 1). Embedded memory and certain peripherals were provided to support the operation of the processor, including a UART, timers, a real-time clock (RTC), a watchdog timer and the interrupt controller. Application-specific communication peripherals, including a USB function core, GPIO, a UART and synchronous serial ports (SSPs), were provided to facilitate connection to external devices. The application-specific management requirements of the device led to the design of power-, memory-protection-, security-, reset- and clock-management modules.
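For orientation, the following C fragment sketches the kind of memory map such a device presents to the processor. Every base address and symbol name here is a hypothetical illustration; the article does not give the controller's actual map.

    /* Hypothetical memory map for an ARM7TDMI-based controller of this kind.
     * All base addresses are invented for illustration only. */
    #define SRAM_BASE   0x00000000u  /* embedded memory               */
    #define UART0_BASE  0x80000000u  /* system UART                   */
    #define TIMER_BASE  0x80001000u  /* general-purpose timers        */
    #define RTC_BASE    0x80002000u  /* real-time clock               */
    #define WDOG_BASE   0x80003000u  /* watchdog timer                */
    #define INTC_BASE   0x80004000u  /* interrupt controller          */
    #define GPIO_BASE   0x80005000u  /* general-purpose I/O           */
    #define SSP0_BASE   0x80006000u  /* synchronous serial port 0     */
    #define PMU_BASE    0x80007000u  /* power-management unit         */
    #define MPU_BASE    0x80008000u  /* memory-protection unit        */
    #define USB_BASE    0x90000000u  /* USB function core, on the AHB */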
The higher-bandwidth peripherals, including the USB core and the memory modules, sit on the Amba Advanced High-Performance Bus (AHB), which had a maximum clock frequency of 64 MHz. The customer supplied the USB core, and the silicon vendor's library provider, in this case Artisan Components (Sunnyvale, Calif.), supplied the memory modules; to ensure that these AHB peripherals adhered to the ARM Amba standard, special wrappers had to be designed. In addition, a memory-protection unit was designed to manage memory partitioning.
One of the biggest challenges, and one that had to be carefully managed, was the commissioning of so many different pieces of IP from so many different sources. It is neither possible nor necessary to exhaustively verify every functional module within an SoC design like the one under discussion. After all, the main driving factor behind IP reuse is reduced time-to-market.
As discussed earlier, the design contained IP from several different sources as well as newly developed logic. Although it is important for new logic to be exhaustively tested, pre-existing, silicon-hardened modules need only have their interfaces verified to confirm that integration has been completed successfully. This was the approach taken for the controller design.
Comprehensive tests
A simulation regression suite was built up that comprehensively tested all the device interfaces and the new logic. At some point, though, a regression suite becomes unmanageable because of long run-times, and the returns diminish because the tests often duplicate coverage of the same areas. Judging when to stop testing is difficult, because there is always a concern that bugs may still exist.
A common approach when simulating a design of this type is to use a staged methodology: modules are verified in isolation first, then at the subsystem level and finally in the context of the full chip. Although it is important to adopt this staged-simulation approach when developing an SoC that includes an embedded processor, it does not guarantee that the device will function correctly once software is applied. It is generally accepted that simulation alone is not the answer: once software is running on the silicon, it will exercise the device more exhaustively, and in different ways, than was achieved using the simulation environment.
With devices like the Programmable Universal Controller, the benefits of having an embedded processor available for verification are often overlooked. The strategy we adopted in the simulation environment was to use assembler-based tests that targeted specific requirements. This included memory-mapping the testbench modules, so that every aspect of the verification process (configuration, pass/fail checking and performance measurement) was controlled from the assembly code itself.
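As a concrete illustration of what memory-mapping the testbench means in practice, here is a minimal sketch, written in C for readability rather than the ARM assembler actually used. The testbench window address, register layout and result codes are all assumptions made purely for this example.

    /* A minimal sketch of memory-mapped testbench control, written in C for
     * readability; the project's real tests were ARM assembler. The testbench
     * window address, register layout and result codes below are assumptions
     * made purely for illustration. */
    #include <stdint.h>

    #define TB_BASE        0xFFFF0000u  /* assumed testbench window */
    #define TB_CONFIG      (*(volatile uint32_t *)(TB_BASE + 0x00))
    #define TB_STATUS      (*(volatile uint32_t *)(TB_BASE + 0x04))
    #define TB_CYCLE_COUNT (*(volatile uint32_t *)(TB_BASE + 0x08))
    #define TB_RESULT      (*(volatile uint32_t *)(TB_BASE + 0x0C))

    #define TB_CFG_LOOPBACK 0x1u     /* e.g. loop the UART transmit line back to receive */
    #define TB_RESULT_PASS  0x600Du
    #define TB_RESULT_FAIL  0xDEADu

    void run_uart_loopback_test(void)
    {
        TB_CONFIG = TB_CFG_LOOPBACK;       /* configure the testbench stub             */
        uint32_t start = TB_CYCLE_COUNT;   /* performance: snapshot a cycle counter    */

        /* ... drive the UART under test here ... */

        uint32_t cycles = TB_CYCLE_COUNT - start;
        (void)cycles;                      /* could be compared against a cycle budget */

        /* pass/fail is written back to the testbench, which ends the simulation */
        TB_RESULT = (TB_STATUS & 0x1u) ? TB_RESULT_PASS : TB_RESULT_FAIL;
    }

The attraction of this arrangement is that the same test code needs nothing but the processor and the memory map, so it can move unchanged between simulation and an emulation board.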
This is a well-honed approach we have adopted. However, despite carrying out these different levels of simulation and adding code coverage to ensure test coverage is adequate, the fact that software will run on this device cannot be overlooked. After all, it makes up an integral part of the system.
Emulation was another powerful verification technique adopted for the controller project. We commissioned an off-the-shelf emulation board containing a bonded-out ARM processor: a low-cost but nevertheless effective emulation option. One of its main advantages was portability, which enabled the customer to carry out initial software development. It also allowed us to test the design RTL more exhaustively, since the data throughput was far higher than could be achieved in the simulation environment.
From the customer's point of view, the main driving force behind the emulation environment was that the customer, and indeed the customer's customers, would have a platform on which to develop application code. That goal was met. Once emulation-based testing and software development are introduced, the number of bugs found should decline, but the complexity and severity of the bugs that are uncovered increase. The most complex bugs can often be found only with a technique such as emulation. Our main objective in using emulation was to exercise the RTL more exhaustively before committing to silicon, so the priority was to apply unmodified RTL to the FPGA.
As with any emulation approach, large or small, low-cost or expensive, certain trade-offs are necessary. System speeds have to be scaled, and in our case build options had to be considered, because the capacity of the single Altera FPGA was finite. There was duplication within the device in the form of the SSP modules, and it was not necessary to emulate all of them. Two configurations were considered; their details are shown in Table 1.
Emulation work on the controller proved to be invaluable. Table 2 lists the bugs it uncovered and the inefficiencies it highlighted in the modules.
Clock speeds
Consider the RTC-related bug. We investigated it by increasing the speed of the RTC's source clock, which let us check every aspect of the real-time clock within minutes rather than waiting in real time. The RTC source clock defaults to 1 Hz, which gives correct real-time operation; however, writing certain values to the RTC clock divider in the PMU increases this clock frequency. Table 3 lists those values and the corresponding frequencies; all other values result in a clock frequency of 1 Hz. By speeding up the RTC in this way and configuring the UART within the FPGA to output the time in hours, minutes and seconds, it was possible to watch the RTC cycle through its full range. It became apparent, however, that the clock was wrapping at 19 hours, 59 minutes and 59 seconds.
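The following sketch shows the kind of check this enabled, again in C rather than the assembler run on the board, and with invented register addresses and a placeholder divider value standing in for the real Table 3 entries.

    /* Illustrative only: the register addresses, field layout and the divider
     * value below are assumptions, not the controller's actual programming model. */
    #include <stdint.h>
    #include <stdio.h>

    #define PMU_RTC_DIV (*(volatile uint32_t *)0x40001010u) /* assumed PMU RTC clock divider */
    #define RTC_HOURS   (*(volatile uint32_t *)0x40002000u) /* assumed RTC time registers    */
    #define RTC_MINUTES (*(volatile uint32_t *)0x40002004u)
    #define RTC_SECONDS (*(volatile uint32_t *)0x40002008u)

    #define RTC_DIV_FAST 0x5Au  /* placeholder for one of the speed-up values in Table 3 */

    int check_rtc_wrap(void)
    {
        PMU_RTC_DIV = RTC_DIV_FAST;            /* run the RTC far faster than its 1-Hz default */

        uint32_t prev_hours = 0;
        for (;;) {
            uint32_t h = RTC_HOURS, m = RTC_MINUTES, s = RTC_SECONDS;
            printf("%02u:%02u:%02u\r\n",       /* mirrors the hh:mm:ss stream sent to the UART */
                   (unsigned)h, (unsigned)m, (unsigned)s);

            if (h < prev_hours) {              /* the hours counter has just wrapped           */
                /* a correct RTC wraps from 23:59:59; the bug wrapped at 19:59:59 */
                return (prev_hours == 23u) ? 0 : -1;
            }
            prev_hours = h;
        }
    }

In the actual setup the equivalent time display came straight over the FPGA's UART, and the premature wrap was spotted simply by watching the output.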
Now you would be forgiven for thinking that this problem should have been uncovered during simulation, and that a fundamental issue such as this should not have slipped through the net. However, we did miss it, and it was only our more extensive emulation tests that caught the failure. To misquote a well-used phrase, hindsight is not a great thing when a customer has committed hundreds of thousands of dollars to IC masks and a prototype device, only to have the device classed as unserviceable because of a small bug such as this. It is therefore imperative that we make our testing as exhaustive as time and budget permit.
Although we have a recognized flow, it is not infallible and it would be arrogant to suggest it ever will be. Therefore we must constantly review and enhance the flow to stay ahead of the ever-increasing verification challenges.
As with all SoC projects, the interesting aspects are often the most challenging aspects. Although I have touched on one area of the controller design, there were many more challenges, including:
Design-for-test issues, as a result of the embedded memory testing.
Static-timing-analysis (STA) issues, as a result of the many asynchronous interfaces in the design.
Physical implementation issues, as a result of multiple bonding options.
As customers look to get more and more from their SoC designs, these are exactly the kinds of issues they need solved, and it is necessary to work closely with the customer and the silicon vendors to achieve as many of the initial objectives as possible, if not all of them.
The net result of this engagement was that the customer was able to demonstrate the final application within a week of receiving initial samples. The device is about to be released for full production.
---
As an ASIC designer for 10 years, David Morrison knows something about verifying an SoC. Morrison manages ARM-based SoC designs at Tality Design Services in Livingston, Scotland. He has a BEng Hons degree in electronic engineering from Napier University (Edinburgh, Scotland).
http://www.isdmag.com
© 2001 CMP Media LLC.
11/1/01, Issue # 13149, page 28.